expose-floor-data-types #33
Conversation
Signed-off-by: Sandro Cantarella <sandro@Sandros-Mac-mini.fritz.box>
Signed-off-by: Sandro Cantarella <sandro@79f3d049-9006-400d-9954-2e5dcd250fa9.fritz.box>
Signed-off-by: Sandro Cantarella <sandro@Sandros-Mac-mini.fritz.box>
Signed-off-by: SCA075 <82227818+sca075@users.noreply.github.com>
- Add FloorData and TrimsData to imports from config.types
- Add both types to __all__ list for public API
- Enables MQTT Vacuum Camera to import these types for multi-floor support
Walkthrough

This pull request introduces comprehensive image rendering and color management infrastructure for map visualization, including new color management and drawable utility classes, handlers for different vacuum formats (Hypfer and Rand256), and extensive test coverage. A legacy element map generator is removed, and public API exports are updated to include floor and trim data types.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring extra attention:
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Signed-off-by: SCA075 <82227818+sca075@users.noreply.github.com>
Actionable comments posted: 20
Note: Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🟡 Minor comments (11)
tests/test_mvcrender.py (1)

71-74: String formatting bug: ITERATIONS not interpolated.

Line 73 puts `{ITERATIONS}` inside a regular (non-f) string, so it won't be interpolated. This will print the literal text `{ITERATIONS}` instead of the value `100`.

🔎 Proposed fix

```diff
 # Test 1: AutoCrop with rotation (most complex operation)
 print(f"\n{'=' * 70}")
-print("Test 1: AutoCrop with rotation ({ITERATIONS} iterations)")
+print(f"Test 1: AutoCrop with rotation ({ITERATIONS} iterations)")
 print(f"{'=' * 70}")
```

tests/convert_rand_to_hypfer.py (1)
309-312: Guard against division by zero.

If there are no segments or all segments are empty, `compressed_pixels` could be zero, causing a `ZeroDivisionError`.

🔎 Proposed fix

```diff
     print(f"Original pixel data: {original_pixels} values")
     print(f"Compressed pixel data: {compressed_pixels} values")
-    print(f"Compression ratio: {original_pixels / compressed_pixels:.2f}x")
-    print(f"Memory reduction: {(1 - compressed_pixels/original_pixels) * 100:.1f}%")
+    if compressed_pixels > 0 and original_pixels > 0:
+        print(f"Compression ratio: {original_pixels / compressed_pixels:.2f}x")
+        print(f"Memory reduction: {(1 - compressed_pixels/original_pixels) * 100:.1f}%")
+    else:
+        print("No pixel data to calculate compression ratio")
```

backups/drawable_ori.py (1)
346-415: Inconsistent coordinate ordering between `_filled_circle` and `_filled_circle_optimized`.

These two similar methods use opposite coordinate conventions:

- `_filled_circle` (line 358): `y, x = center`
- `_filled_circle_optimized` (line 399): `x, y = center`

This inconsistency is error-prone and could cause subtle bugs if a caller switches between these methods.
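A tiny sketch of how the opposite conventions bite when a caller passes the same `center` to both. The `fill_yx`/`fill_xy` helpers below are hypothetical stand-ins, not the actual `Drawable` methods; they only mark a single pixel to make the divergence visible:

```python
import numpy as np

def fill_yx(grid, center, value):
    y, x = center  # convention assumed by _filled_circle
    grid[y, x] = value

def fill_xy(grid, center, value):
    x, y = center  # convention assumed by _filled_circle_optimized
    grid[y, x] = value

grid_a = np.zeros((8, 8), dtype=int)
grid_b = np.zeros((8, 8), dtype=int)
center = (2, 6)  # ambiguous: is this (y, x) or (x, y)?

fill_yx(grid_a, center, 1)  # marks row 2, column 6
fill_xy(grid_b, center, 1)  # marks row 6, column 2

# Same input, different pixel touched:
assert not np.array_equal(grid_a, grid_b)
```

On a non-square image the mismatch can even become an `IndexError` instead of a silently transposed drawing, which is exactly the "subtle bug" class the comment warns about.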
backups/drawable_ori.py (1)

34-40: Comment color description doesn't match the actual color value.

The comment says "Red color for error messages" for `ERROR_OUTLINE`, but the color value `(0, 0, 0, 255)` is black (RGBA). Similarly, the `ERROR_COLOR` comment says "Red color with lower opacity for error outlines", which is inconsistent with the first comment.

🔎 Proposed fix

```diff
-    ERROR_OUTLINE: Color = (0, 0, 0, 255)  # Red color for error messages
-    ERROR_COLOR: Color = (
+    ERROR_OUTLINE: Color = (0, 0, 0, 255)  # Black color for error outlines
+    ERROR_COLOR: Color = (
         255,
         0,
         0,
         191,
-    )  # Red color with lower opacity for error outlines
+    )  # Red color with lower opacity for error fill
```

new_tests/TEST_RESULTS_SUMMARY.md (1)
6-11: Documentation inconsistency: statistics conflict with detailed sections.

The summary states "131 (100%) ✅" passing, but subsequent sections (lines 25-57) describe multiple failing tests with specific counts (e.g., "1 Failure" in test_shared.py, "4 Failures" in test_drawable.py). Either update the statistics to reflect the actual state or clarify that the numbers represent tests after fixes were applied.
new_tests/IMPLEMENTATION_SUMMARY.md (1)

23-29: Documentation conflict with TEST_RESULTS_SUMMARY.md.

This file reports 78% passing (108/138), while TEST_RESULTS_SUMMARY.md claims 100% passing (131/131). Consider reconciling these documents or adding timestamps to clarify that they represent different points in time.
tests/rand_rooms_test.py (1)

232-233: Variable shadowing: `room_data` has two different meanings.

Line 232 assigns `room_data = dest_json.get("rooms", [])` (a list of room info), then line 265 assigns `room_data = {...}` (a single room's properties). This shadowing can cause confusion and potential bugs.

🔎 Proposed fix

```diff
     # Process destinations JSON to get room names
     dest_json = destinations
-    room_data = dest_json.get("rooms", [])
-    room_id_to_data = {room["id"]: room for room in room_data}
+    dest_rooms = dest_json.get("rooms", [])
+    room_id_to_data = {room["id"]: room for room in dest_rooms}
```

Also applies to: 265-271
tests/rooms_test.py (1)

71-125: `os.makedirs` with an empty directory path will fail.

Line 121 calls `os.makedirs(os.path.dirname(test_data_path), exist_ok=True)` where `test_data_path = "test.json"`. Since `os.path.dirname("test.json")` returns an empty string, this will raise an error on some systems.

🔎 Proposed fix

```diff
-    os.makedirs(os.path.dirname(test_data_path), exist_ok=True)
+    dir_name = os.path.dirname(test_data_path)
+    if dir_name:
+        os.makedirs(dir_name, exist_ok=True)
```

tests/test_hypfer_profiling.py (1)
243-251: Potential `NameError` for `cpu_profiler` in finally block.

If an exception occurs before line 193 assigns `cpu_profiler`, the reference on line 247 will raise `NameError`. Initialize it before the try block:

```diff
     start_time = time.time()
+    cpu_profiler = None
     if self.profiler:
         self.profiler.take_memory_snapshot("Before Image Generation")
         cpu_profiler = self.profiler.start_cpu_profile("Hypfer Image Generation")
```

Committable suggestion skipped: line range outside the PR's diff.
backups/hypfer_rooms_handler.py (1)

1-6: Typo: "Hipfer" should be "Hypfer".

The module docstring and class docstring consistently use "Hipfer" instead of "Hypfer". This should match the naming convention used elsewhere in the codebase.

```diff
 """
-Hipfer Rooms Handler Module.
-Handles room data extraction and processing for Valetudo Hipfer vacuum maps.
+Hypfer Rooms Handler Module.
+Handles room data extraction and processing for Valetudo Hypfer vacuum maps.
```

backups/rand25_handler_rooms.py (1)
97-99: Return type annotation is incorrect.

The method signature declares `RoomsProperties` as the return type, but the actual return on line 164 is `tuple[RoomsProperties, zone_properties, point_properties]` (a 3-tuple).

🔎 Proposed fix

```diff
     async def extract_room_properties(
         self, json_data: JsonType, destinations: JsonType
-    ) -> RoomsProperties:
+    ) -> tuple[RoomsProperties, Any, Any]:
         """Extract the room properties."""
```
🧹 Nitpick comments (47)
tests/debug_manage_rooms.py (1)
13-16: Consider improving error handling for a better debugging experience.

The current exception handling could be enhanced:

- Errors should be printed to `stderr` rather than `stdout` (standard convention).
- The script should exit with a non-zero code on failure for better integration with automation tools.

🔎 Proposed improvements

```diff
 try:
     asyncio.run(main())
 except Exception:
-    traceback.print_exc(file=sys.stdout)
+    traceback.print_exc(file=sys.stderr)
+    sys.exit(1)
```

tests/PROFILING_README.md (1)
54-81: Add language specifiers to fenced code blocks.

The markdown linter flags missing language specifiers for fenced code blocks at lines 54, 67, and 76. These appear to be example output text.

🔎 Proposed fix to add language specifiers

````diff
-```
+```text
 🔍 Memory Usage Timeline:
-```
+```text
 ⚡ CPU USAGE ANALYSIS
-```
+```text
 ⏱️ TIMING ANALYSIS
````
1-373: Documentation provides a useful reference for library usage patterns.

This comprehensive usage report documents the valetudo_map_parser types well. A few minor markdown linting improvements could be made:

- Tables (lines 226-260) should be surrounded by blank lines per MD058
- The fenced code block at line 337 should have a language specifier (e.g., `text` or `plaintext`)
- Line 372 uses emphasis instead of a heading (MD036)

These are cosmetic and don't affect the content's utility.
tests/analyze_room_connections.py (2)
104-127: Brute-force distance calculation may be slow for large rooms.

The nested loops computing minimum distance have O(n×m) complexity, where n and m are pixel counts. For rooms with thousands of pixels, this can be quite slow. Consider using `scipy.spatial.distance.cdist` for vectorized computation.

🔎 Proposed optimization using cdist

```diff
+from scipy.spatial.distance import cdist
+
 if len(room2_indices[0]) > 0 and len(room7_indices[0]) > 0:
-    # Calculate the minimum distance between Room 2 and Room 7
-    min_distance = float("inf")
-    closest_point_room2 = None
-    closest_point_room7 = None
-
-    for i in range(len(room2_indices[0])):
-        y2, x2 = room2_indices[0][i], room2_indices[1][i]
-        for j in range(len(room7_indices[0])):
-            y7, x7 = room7_indices[0][j], room7_indices[1][j]
-            distance = np.sqrt((x2 - x7) ** 2 + (y2 - y7) ** 2)
-            if distance < min_distance:
-                min_distance = distance
-                closest_point_room2 = (x2, y2)
-                closest_point_room7 = (x7, y7)
+    # Vectorized distance calculation
+    coords2 = np.column_stack((room2_indices[1], room2_indices[0]))  # (x, y)
+    coords7 = np.column_stack((room7_indices[1], room7_indices[0]))
+    distances = cdist(coords2, coords7)
+    min_idx = np.unravel_index(np.argmin(distances), distances.shape)
+    min_distance = distances[min_idx]
+    closest_point_room2 = (coords2[min_idx[0], 0], coords2[min_idx[0], 1])
+    closest_point_room7 = (coords7[min_idx[1], 0], coords7[min_idx[1], 1])
```
152-154: Unused output directory creation.

The output directory is created but never used. The commented-out matplotlib import suggests visualization was planned. Consider removing this dead code or implementing the visualization.

```diff
-    # Create a text-based visualization of the rooms
-    output_dir = os.path.join(script_dir, "output")
-    os.makedirs(output_dir, exist_ok=True)
```

tests/test_mvcrender.py (1)
10-22: Non-Pythonic semicolons in class definition.

The semicolons at the end of statements (lines 14, 15, 18, 20) are unnecessary in Python and reduce readability.

🔎 Proposed cleanup

```diff
 class DummyShared:
     def __init__(self):
         self.trims = type("T", (), {"to_dict": lambda self: {"trim_up": 0, "trim_down": 0, "trim_left": 0, "trim_right": 0}})()
-        self.offset_top = 0;
-        self.offset_down = 0;
-        self.offset_left = 0;
+        self.offset_top = 0
+        self.offset_down = 0
+        self.offset_left = 0
         self.offset_right = 0
-        self.vacuum_state = "cleaning";
+        self.vacuum_state = "cleaning"
         self.image_auto_zoom = True
-        self.image_ref_width = 0;
+        self.image_ref_width = 0
         self.image_ref_height = 0
```

tests/test_rand_to_hypfer_compression.py (2)
14-67: Compression function is well-implemented with correct run-length encoding.

The algorithm correctly groups consecutive x pixels on the same row into runs. The coordinate transformation and run-building logic are sound.

Minor note: `prev_y` (line 61) is assigned but never used, since row changes are detected via `y == run_y`.

🔎 Remove unused variable

```diff
-    prev_x, prev_y = x, y
+    prev_x = x
```
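The run-building idea can be sketched in isolation. The `(x, y)` input ordering and `[x_start, y, count]` run layout below are illustrative assumptions for the sketch, not necessarily the script's exact format:

```python
def compress_runs(pixels):
    """Run-length encode (x, y) pixels into [x_start, y, count] runs.
    Assumes pixels are sorted by row (y), then column (x)."""
    runs = []
    for x, y in pixels:
        last = runs[-1] if runs else None
        # Extend the current run only if we stay on the same row and this
        # pixel sits exactly one column past the run's current end.
        if last and last[1] == y and last[0] + last[2] == x:
            last[2] += 1
        else:
            runs.append([x, y, 1])
    return runs

print(compress_runs([(5, 0), (6, 0), (7, 0), (2, 1), (9, 1)]))
# [[5, 0, 3], [2, 1, 1], [9, 1, 1]]
```

Each run replaces `count` coordinate pairs with three integers, which is where the compression ratio reported by the script comes from.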
70-76: Import `os` should be at module level.

Moving the `os` import to the top of the file with the other imports follows PEP 8 conventions and makes dependencies clearer.

```diff
 """
 import json
+import os
 
 
 def compress_rand_to_hypfer(
 ...
 def main():
     """Test the compression on segment 20 from rand.json."""
-
-    # Load rand.json
-    import os
     script_dir = os.path.dirname(os.path.abspath(__file__))
```
tests/benchmark_margins.py (1)

99-101: Unused `auto_crop` instance.

The `AutoCrop` instance is created but never used in the benchmark. Either remove it or use it to benchmark the actual AutoCrop methods.

```diff
 # Create AutoCrop instance
 handler = DummyHandler()
-auto_crop = AutoCrop(handler)
```

tests/analyze_room12.py (3)
25-26: Add error handling for a missing test data file.

Unlike `analyze_segment_walls.py`, which checks if the file exists before opening, this script will raise an unhandled `FileNotFoundError` if `test.json` is missing.

🔎 Proposed fix

```diff
+    if not os.path.exists(test_data_path):
+        _LOGGER.error(f"Test data file not found: {test_data_path}")
+        return
+
     with open(test_data_path, "r", encoding="utf-8") as file:
         data = json.load(file)
```
79-79: Move import to top of file for clarity.

Importing `scipy.ndimage` inside the function is unusual. For a dedicated analysis script, moving this import to the top level (alongside `numpy`) would fail fast if the dependency is missing and improve readability.
100-106: Consider documenting the magic-number thresholds.

The hardcoded coordinate thresholds (3350, 3345, 2540, 2535) lack context. A brief comment explaining what these values represent would help future maintainers understand the debugging intent.
tests/analyze_segment_walls.py (2)
126-134: Consider optimizing adjacency search with a set-based lookup.

The nested loop over `segment_coords` and `wall_coords` has O(n*m) complexity. For large maps, this could be slow. Converting `wall_coords` to a set and checking the 8-neighborhood would reduce complexity to O(n).

🔎 Proposed optimization

```diff
     # Find segment pixels that are adjacent to wall pixels
+    wall_set = set(wall_coords)
     for sx, sy in segment_coords:
-        for wx, wy in wall_coords:
-            # Check if the segment pixel is adjacent to the wall pixel
-            if abs(sx - wx) <= 1 and abs(sy - wy) <= 1:
-                adjacent_pairs.append(((sx, sy), (wx, wy)))
-                break
+        # Check 8-neighborhood for wall pixels
+        for dx in (-1, 0, 1):
+            for dy in (-1, 0, 1):
+                neighbor = (sx + dx, sy + dy)
+                if neighbor in wall_set:
+                    adjacent_pairs.append(((sx, sy), neighbor))
+                    break
+            else:
+                continue
+            break
 
     return adjacent_pairs
```
60-65: Triplet bounds check is correct but could use `<=`.

The condition `i + 2 < len(compressed_pixels)` works but is subtly confusing. Using `i + 3 <= len(compressed_pixels)`, or simply relying on the range, would be clearer, since you need indices `i`, `i+1`, and `i+2` to all be valid.
tests/convert_rand_to_hypfer.py (2)

17-18: Remove unused variable `prev_y`.

The variable `prev_y` is assigned on line 33 but never read. It can be removed to avoid confusion.

🔎 Proposed fix

```diff
-    prev_x = prev_y = None
+    prev_x = None
     run_start_x = run_y = None
```

And on line 33:

```diff
-    prev_x, prev_y = x, y
+    prev_x = x
```

Also applies to: 33-33
46-55: Consider incremental computation to reduce memory usage.

Expanding all pixel runs into individual coordinates (lines 52-55) can consume significant memory for large segments. The min/max/avg statistics could be computed incrementally without storing all coordinates.
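As a sketch of the incremental approach (assuming an illustrative `(x_start, y, count)` run layout, not necessarily the script's exact format), the x statistics fall out of the arithmetic-series sum without expanding any run:

```python
def run_stats(runs):
    """Compute min/max/mean of x over run-length encoded pixels without
    materializing them. Each run is assumed to be (x_start, y, count)."""
    x_min, x_max = float("inf"), float("-inf")
    total, n = 0, 0
    for x_start, _y, count in runs:
        x_min = min(x_min, x_start)
        x_max = max(x_max, x_start + count - 1)
        # Sum of the arithmetic series x_start .. x_start + count - 1
        total += count * x_start + count * (count - 1) // 2
        n += count
    return x_min, x_max, total / n

print(run_stats([(5, 0, 3), (2, 1, 1)]))
# (2, 7, 5.0)
```

Memory stays O(1) regardless of segment size, since only the running aggregates are kept.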
tests/tests/test_robot_angles.py (1)
1-6: Consider renaming to avoid pytest collection.

This script uses print-based output rather than assertions and is not a proper pytest test. The `test_` prefix in the filename and function names may cause pytest to attempt collection. Consider renaming to `analyze_robot_angles.py` or similar.
backups/test_old_pars.py (1)

53-107: Recursive parsing could cause stack overflow on malformed data.

`parse_block` uses tail recursion (line 107) to process sequential blocks. With malformed data containing very small or zero-length blocks, this could exceed Python's recursion limit. Consider an iterative approach if this code is used in production.

🔎 Iterative alternative

```diff
     @staticmethod
     def parse_block(
         buf: bytes,
         offset: int,
         result: Optional[Dict[int, Any]] = None,
         pixels: bool = False,
     ) -> Dict[int, Any]:
         """Parse a block of data from the map data."""
         result = result or {}
-        if len(buf) <= offset:
-            return result
-
-        # ... parsing logic ...
-
-        return RRMapParser.parse_block(buf, offset + length + hlength, result, pixels)
+        while offset < len(buf):
+            # ... parsing logic ...
+            offset = offset + length + hlength
+        return result
```
backups/drawable_ori.py (1)

470-476: Unused grid allocation and slow nested loop despite the docstring claiming vectorized operations.

Line 471 creates `y_indices, x_indices` using `np.mgrid`, but they are never used; the code immediately iterates with nested Python loops calling `point_inside` for each pixel. This is both wasteful (allocates unused arrays) and slow (O(n×m×k), where k is the number of polygon vertices).

🔎 Proposed fix - remove unused allocation

```diff
     # Create a mask for the polygon region
     mask = np.zeros((max_y - min_y + 1, max_x - min_x + 1), dtype=bool)
 
     # Adjust points to the mask's coordinate system
     adjusted_points = [(p[0] - min_x, p[1] - min_y) for p in points]
 
-    # Create a grid of coordinates and use it to test all points at once
-    y_indices, x_indices = np.mgrid[0 : mask.shape[0], 0 : mask.shape[1]]
-
     # Test each point in the grid
     for i in range(mask.shape[0]):
         for j in range(mask.shape[1]):
             mask[i, j] = Drawable.point_inside(j, i, adjusted_points)
```

new_tests/IMPLEMENTATION_SUMMARY.md (1)
161-165: Remove hardcoded local path for portability.

The command includes a user-specific path `/Users/sandro/PycharmProjects/...`. Use a relative path or a generic placeholder.

🔎 Suggested fix

````diff
 ### Run All Tests
 ```bash
-cd /Users/sandro/PycharmProjects/Python-package-valetudo-map-parser
-.venv/bin/python -m pytest new_tests/
+# From repository root:
+python -m pytest new_tests/
 ```
````
backups/refactored_old_code.py (1)

1-44: Consider removing the backup file from version control.

This file contains only commented-out legacy code. Version control history already preserves deleted code, making this backup redundant. Consider removing it or adding the `backups/` directory to `.gitignore`.

tests/test_parser.py (3)
10-16: Path manipulation and import approach is fragile.

The `sys.path.insert(0, str(scr_path))` combined with imports from the `backups` and `valetudo_map_parser.config` directories makes this test dependent on a specific directory structure. Consider using a proper package setup or pytest fixtures for cleaner test isolation.
31-53: Hardcoded payload filename and early silent return may cause test failures to go unnoticed.

The hardcoded filename `map_data_20250728_185945.bin` and the early `return` without raising an exception mean that if the file is missing, the test silently passes. For a test file, consider raising an assertion or using `pytest.skip()` to make missing test data explicit.

🔎 Suggested improvement

```diff
     if not payload_path:
         print(f"Could not find payload file: {payload_file}")
         print("Tried these locations:")
         for path in possible_paths:
             print(f"  - {path}")
-        return
+        raise FileNotFoundError(f"Could not find payload file: {payload_file}")
```
198-205: Hardcoded output path may fail if the `tests/` directory doesn't exist.

The output file path `tests/test_rand256.json` assumes the `tests/` directory exists. Consider using `os.makedirs` with `exist_ok=True`, or writing relative to `__file__`.

🔎 Suggested fix

```diff
     # Save results to JSON file
-    output_file = "tests/test_rand256.json"
+    output_file = Path(__file__).parent / "test_rand256.json"
     try:
+        output_file.parent.mkdir(parents=True, exist_ok=True)
         with open(output_file, "w") as f:
             json.dump(results, f, indent=2, default=str)
```
tests/test_room_store.py (2)

49-136: Missing cleanup of RoomStore singleton instances between test runs.

The `RoomStore` singleton instances created during `test_room_store_singleton()` persist into `test_room_store_edge_cases()` and subsequent test runs. This can cause test pollution and flaky tests. Consider adding a cleanup mechanism.

🔎 Suggested improvement

```diff
+def cleanup_room_store():
+    """Clear all RoomStore instances for test isolation."""
+    RoomStore._instances.clear()
+
+
 def test_room_store_singleton():
     """Test that RoomStore maintains singleton per vacuum_id."""
+    cleanup_room_store()
     _LOGGER.info("=" * 60)
```
260-262: Tests run sequentially without pytest integration.

Running tests via `if __name__ == "__main__"` bypasses pytest's test discovery, fixtures, and reporting. Consider converting to proper pytest test functions or adding pytest markers.
tests/debug_binary.py (1)

162-168: Silent failure when the payload file is not found.

The script silently returns when the payload file doesn't exist, which could lead to confusion. Consider adding a more explicit error or usage message.
tests/test_status_text_performance.py (1)
10-11: Duplicate import of `sys` module.

`sys` is imported on line 10 and again on line 114. Remove the duplicate import on line 114.
tests/compare_payloads.py (1)

93-105: Hardcoded payload paths reduce test portability.

The payload file paths are hardcoded with specific filenames and relative paths. Consider using a configuration file or an environment variable for test data locations.
tests/rand_rooms_test.py (2)
125-197: Unused method `_process_segment_data` is dead code.

The `_process_segment_data` method is defined but never called anywhere in the file. The room processing logic in `async_extract_room_properties` (lines 236-278) duplicates much of this functionality. Consider either removing this dead code or refactoring to use it.
360-368: Deprecated asyncio pattern for running async code.

The `asyncio.new_event_loop()` / `set_event_loop()` / `run_until_complete()` pattern is deprecated. Use `asyncio.run()` instead.

🔎 Proposed fix

```diff
 def __main__():
     """Main function."""
-    loop = asyncio.new_event_loop()
-    asyncio.set_event_loop(loop)
-
-    try:
-        loop.run_until_complete(test_rand_rooms_handler())
-    finally:
-        loop.close()
+    asyncio.run(test_rand_rooms_handler())
```

tests/rooms_test.py (1)
137-153: Missing exception handling in `convex_hull_outline` compared to a similar implementation.

The `convex_hull_outline` function in `rand_rooms_test.py` (lines 94-123) includes a try-except block with a bounding box fallback. This implementation lacks that protection, which could cause uncaught exceptions for degenerate point configurations.

🔎 Suggested improvement

```diff
 def convex_hull_outline(mask: np.ndarray) -> list[tuple[int, int]]:
     y_indices, x_indices = np.where(mask > 0)
     if len(x_indices) == 0 or len(y_indices) == 0:
         return []
 
     points = np.column_stack((x_indices, y_indices))
     if len(points) < 3:
         return [(int(x), int(y)) for x, y in points]
 
-    hull = ConvexHull(points)
-    hull_points = [
-        (int(points[vertex][0]), int(points[vertex][1])) for vertex in hull.vertices
-    ]
-    if hull_points[0] != hull_points[-1]:
-        hull_points.append(hull_points[0])
-    return hull_points
+    try:
+        hull = ConvexHull(points)
+        hull_points = [
+            (int(points[vertex][0]), int(points[vertex][1])) for vertex in hull.vertices
+        ]
+        if hull_points[0] != hull_points[-1]:
+            hull_points.append(hull_points[0])
+        return hull_points
+    except Exception:
+        # Fallback to bounding box
+        x_min, y_min = np.min(points, axis=0)
+        x_max, y_max = np.max(points, axis=0)
+        return [
+            (int(x_min), int(y_min)),
+            (int(x_max), int(y_min)),
+            (int(x_max), int(y_max)),
+            (int(x_min), int(y_max)),
+            (int(x_min), int(y_min)),
+        ]
```
tests/test_floor_data.py (2)

6-27: Consider importing the actual classes instead of replicating them.

The test replicates `TrimsData` and `FloorData` locally rather than importing from `valetudo_map_parser.config.types`. This creates a risk that tests pass while the actual implementation differs. If the goal is to test the public API exports (per PR objectives), import the real types:

```diff
-# Replicate TrimsData for testing
-@dataclass
-class TrimsData:
-    floor: str = ""
-    trim_up: int = 0
-    trim_left: int = 0
-    trim_down: int = 0
-    trim_right: int = 0
-
-    @classmethod
-    def from_list(cls, crop_area: List[int], floor: Optional[str] = None):
-        return cls(
-            trim_up=crop_area[0],
-            trim_left=crop_area[1],
-            trim_down=crop_area[2],
-            trim_right=crop_area[3],
-            floor=floor or "",
-        )
-
-    def to_dict(self):
-        return asdict(self)
+from valetudo_map_parser import FloorData, TrimsData
```

This also validates that the public exports work correctly.
212-231: Consider using pytest for consistency with the test suite.

This file uses a custom test runner, while `new_tests/` use pytest. Converting to pytest would provide better CI integration, clearer failure reporting, and consistency.

🔎 Example conversion

```python
import pytest

from valetudo_map_parser import FloorData, TrimsData


def test_trims_data_from_list():
    """Test TrimsData.from_list() with crop_area."""
    crop_area = [790, 490, 3209, 2509]
    trims = TrimsData.from_list(crop_area, floor="Ground Floor")
    assert trims.trim_up == 790
    assert trims.trim_left == 490
    # ... etc
```

tests/test_hypfer_profiling.py (2)
21-26: Fragile path manipulation for imports.

Using `sys.path.insert` with relative paths is brittle and can break depending on the working directory. Consider using package-relative imports or configuring `PYTHONPATH` in the test setup instead.

```diff
-# Add the parent directory to the path so we can import the modules
-sys.path.insert(0, os.path.join(os.path.dirname(__file__), ".."))
-
-from SCR.valetudo_map_parser.config.shared import CameraSharedManager
-from SCR.valetudo_map_parser.hypfer_handler import HypferMapImageHandler
+from valetudo_map_parser.config.shared import CameraSharedManager
+from valetudo_map_parser.hypfer_handler import HypferMapImageHandler
```

This requires proper package installation (e.g., `pip install -e .`).
259-277: Use `asyncio.run()` for cleaner event loop handling.

The manual event loop creation pattern is verbose and outdated. Modern Python (3.7+) provides `asyncio.run()`, which handles the loop lifecycle automatically.

🔎 Suggested refactor

```diff
-    loop = asyncio.new_event_loop()
-    asyncio.set_event_loop(loop)
-
     # Legacy cProfile for compatibility
     profiler = cProfile.Profile()
     profiler.enable()
 
     try:
         if test.profiler:
             test.profiler.take_memory_snapshot("Test Start")
-        loop.run_until_complete(test.test_image_handler())
+        asyncio.run(test.test_image_handler())
         if test.profiler:
             test.profiler.take_memory_snapshot("Test Complete")
     finally:
         profiler.disable()
-        loop.close()
```

backups/hypfer_rooms_handler.py (2)
68-93: Consider making `rdp` non-async for better performance.

The `rdp` method is marked `async` but performs no actual async I/O, only recursive calls. This adds unnecessary coroutine overhead. For CPU-bound recursive algorithms like RDP, a regular function is more efficient:

```diff
-    async def rdp(
+    def rdp(
         self, points: List[Tuple[int, int]], epsilon: float
     ) -> List[Tuple[int, int]]:
         """Ramer-Douglas-Peucker algorithm for simplifying a curve."""
         if len(points) <= 2:
             return points
         # ...
         if dmax > epsilon:
-            first_segment = await self.rdp(points[: index + 1], epsilon)
-            second_segment = await self.rdp(points[index:], epsilon)
+            first_segment = self.rdp(points[: index + 1], epsilon)
+            second_segment = self.rdp(points[index:], epsilon)
```

If kept async, note that deep recursion on complex contours could hit Python's recursion limit.
364-379: Potential edge case in `point_in_polygon` with degenerate edges.

If an edge has `p1x == p2x` and `p1y != p2y`, `xinters` is never assigned but is used on line 376. The current condition flow should prevent this, but the logic is fragile. Consider a cleaner guard:

```diff
 if p1y != p2y:
     xinters = (y - p1y) * (p2x - p1x) / (p2y - p1y) + p1x
-if p1x == p2x or x <= xinters:
+    if x <= xinters:
+        inside = not inside
+elif p1x == p2x:  # Vertical edge
     inside = not inside
```

This makes the vertical edge case explicit and avoids relying on `xinters` state.
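A self-contained ray-casting sketch (illustrative code, not the file's implementation) that avoids the dangling-`xinters` problem entirely: the division only happens on edges that actually cross the horizontal ray, and such edges necessarily have `y1 != y2`:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge cross the horizontal ray through y?
        if (y1 > y) != (y2 > y):
            # y1 != y2 is guaranteed here, so the division is always safe
            xinters = (y - y1) * (x2 - x1) / (y2 - y1) + x1
            if x < xinters:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # True
print(point_in_polygon(5, 2, square))  # False
```

Because horizontal edges never satisfy the crossing check, they are skipped rather than special-cased, which removes the fragile ordering of conditions the comment describes.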
backups/hypfer_handler_ori.py (2)

233-237: Unused variable `room_element` after element map update comment.

The variable `room_element` is assigned but never used. The comment suggests it should mark the room in the element map, but the implementation is missing.

🔎 Proposed fix

Either implement the element map update or remove the dead code:

```diff
-                        # Update element map for this layer
-                        if is_room_layer and 0 < room_id <= 15:
-                            # Mark the room in the element map
-                            room_element = getattr(
-                                DrawableElement, f"ROOM_{room_id}", None
-                            )
```
440-440: Deleting loop variables before the return is unnecessary.

Deleting `calibration_point` and `rotation_angle` at the end of the function has no practical benefit, since they go out of scope immediately after the return statement.

🔎 Proposed fix

```diff
     for vacuum_point, map_point in zip(vacuum_points, map_points):
         calibration_point = {"vacuum": vacuum_point, "map": map_point}
         calibration_data.append(calibration_point)
-    del vacuum_points, map_points, calibration_point, rotation_angle  # free memory.
 
     return calibration_data
```

backups/rand25_handler_rooms.py (2)
263-267: Incomplete implementation marked with a pass statement.

The comment indicates this is a simplification and that the real implementation would need to identify the exact pixels for each room. Consider adding a TODO or implementing the functionality.

Would you like me to help implement the element map update for room pixels, or should this be tracked as a separate issue?
430-430: Unnecessary `del` statement for local variables.

Deleting `room`, `corners`, `robot_x`, `robot_y` just before the return provides no memory benefit, as they're already local variables that will be garbage collected.

🔎 Proposed fix

```diff
     _LOGGER.debug("%s is in %s", self.file_name, self.robot_in_room["room"])
-    del room, corners, robot_x, robot_y  # free memory.
     return temp
```

backups/colors.py (2)
270-274: Unnecessary `enumerate` when the index is unused.

The loop uses `enumerate` but discards the index with `_`. Simply iterate directly over the zip.

🔎 Proposed fix

```diff
     result = []
-    for _, (alpha, rgb) in enumerate(zip(alphas, rgb_colors)):
+    for alpha, rgb in zip(alphas, rgb_colors):
         if rgb is None:
             result.append((0, 0, 0, int(alpha)))
         else:
             result.append((rgb[0], rgb[1], rgb[2], int(alpha)))
```
575-580: Potential precision issue in the alpha ratio calculation.

The alpha ratio calculation `fg_alpha / out_alpha[valid_alpha]` can produce unexpected results when `out_alpha` is very small but above the 0.0001 threshold. Consider using a slightly larger threshold or documenting the precision trade-off.

The current implementation handles the edge case with `valid_alpha = out_alpha > 0.0001`, which is reasonable for most use cases.
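The ratio under discussion follows the Porter-Duff "over" compositing operator. A minimal sketch (with an assumed `eps` guard, not the library's actual code) showing where the small-alpha threshold enters:

```python
import numpy as np

def blend_over(fg_rgb, fg_a, bg_rgb, bg_a, eps=1e-4):
    """Porter-Duff 'over' for float images: rgb arrays of shape (H, W, 3),
    alpha arrays of shape (H, W), all values in [0, 1]."""
    out_a = fg_a + bg_a * (1.0 - fg_a)
    num = fg_rgb * fg_a[..., None] + bg_rgb * bg_a[..., None] * (1.0 - fg_a[..., None])
    out_rgb = np.zeros_like(num)
    valid = out_a > eps  # guard against dividing by a near-zero output alpha
    out_rgb[valid] = num[valid] / out_a[valid][:, None]
    return out_rgb, out_a

# 50% red over opaque white -> pink, fully opaque
fg = np.full((1, 1, 3), [1.0, 0.0, 0.0])
bg = np.ones((1, 1, 3))
rgb, a = blend_over(fg, np.full((1, 1), 0.5), bg, np.ones((1, 1)))
```

Pixels failing the guard stay black with their tiny alpha; since they are almost fully transparent, the color error is visually negligible, which is the trade-off the comment suggests documenting.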
backups/drawable.ori (1)

906-908: Potential `AttributeError` if `path_font` is an empty string.

When `path_font` is an empty string `""`, `path_font.endswith("VT.ttf")` returns `False`, but the logic at line 908 assumes `path_font` is truthy. However, if `path_font` is `None`, line 894 would have handled it. The issue is that an empty string may slip past earlier checks while not being a valid font path.

🔎 Proposed fix

```diff
     # Use provided font directly if available; else fall back to default
     user_font = default_font
-    if path_font:
+    if path_font and Path(path_font).exists():
         try:
             user_font = ImageFont.truetype(str(path_font), size)
         except OSError:
             user_font = default_font
```

backups/map_data_ori.py (2)
19-22: Code duplication: `sublist` function exists in multiple files.

This exact implementation exists in at least 3 other locations:

- tests/rooms_test.py:132-133
- backups/hypfer_rooms_handler.py:42-43
- tests/rand_rooms_test.py:68-70

Consider extracting it to a shared utility module to maintain DRY principles.

🔎 Proposed refactor

Create a shared utility and import it:

```python
# In a shared utils module (e.g., config/utils.py)
def sublist(lst: list, n: int) -> list:
    """Split a list into chunks of specified size."""
    return [lst[i : i + n] for i in range(0, len(lst), n)]
```

Then import from the shared location in all files that need it.
282-283: Use modern union syntax with `|` or a proper `Union` type.

The return type hint `list or None` is not valid Python type annotation syntax. Use `list | None` (Python 3.10+) or `Optional[list]`.

🔎 Proposed fix

```diff
     @staticmethod
-    def get_rrm_goto_predicted_path(json_data: JsonType) -> list or None:
+    def get_rrm_goto_predicted_path(json_data: JsonType) -> list | None:
         """Get the predicted path data from the json."""
```

Apply the same fix to lines 324, 402, and 492.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (13)
- tests/map_data_20250728_185945.bin is excluded by `!**/*.bin`
- tests/map_data_20250728_193950.bin is excluded by `!**/*.bin`
- tests/map_data_20250728_194519.bin is excluded by `!**/*.bin`
- tests/map_data_20250728_204538.bin is excluded by `!**/*.bin`
- tests/map_data_20250728_204552.bin is excluded by `!**/*.bin`
- tests/map_data_20250729_084141.bin is excluded by `!**/*.bin`
- tests/map_data_20251002_165115.bin is excluded by `!**/*.bin`
- tests/map_data_20251002_165249.bin is excluded by `!**/*.bin`
- tests/map_data_20251002_165250.bin is excluded by `!**/*.bin`
- tests/map_data_20251002_165251.bin is excluded by `!**/*.bin`
- tests/map_data_20251002_165252.bin is excluded by `!**/*.bin`
- tests/map_data_20251002_165253.bin is excluded by `!**/*.bin`
- tests/map_data_20251003_134705.bin is excluded by `!**/*.bin`
📒 Files selected for processing (58)

- .github/workflows/renovate.json (1 hunks)
- SCR/valetudo_map_parser/__init__.py (4 hunks)
- SCR/valetudo_map_parser/config/optimized_element_map.py (0 hunks)
- SCR/valetudo_map_parser/rand256_handler.py (1 hunks)
- backups/colors.py (1 hunks)
- backups/drawable.ori (1 hunks)
- backups/drawable_ori.py (1 hunks)
- backups/hypfer_handler_ori.py (1 hunks)
- backups/hypfer_rooms_handler.py (1 hunks)
- backups/map_data_ori.py (1 hunks)
- backups/rand25_handler_rooms.py (1 hunks)
- backups/refactored_old_code.py (1 hunks)
- backups/test_old_pars.py (1 hunks)
- examples/async_get_pil_image_example.py (1 hunks)
- new_tests/FIXES_APPLIED.md (1 hunks)
- new_tests/IMPLEMENTATION_SUMMARY.md (1 hunks)
- new_tests/README.md (1 hunks)
- new_tests/TEST_RESULTS_SUMMARY.md (1 hunks)
- new_tests/__init__.py (1 hunks)
- new_tests/config/__init__.py (1 hunks)
- new_tests/config/test_colors.py (1 hunks)
- new_tests/config/test_drawable.py (1 hunks)
- new_tests/config/test_shared.py (1 hunks)
- new_tests/config/test_status_text.py (1 hunks)
- new_tests/config/test_types.py (1 hunks)
- new_tests/conftest.py (1 hunks)
- new_tests/handlers/__init__.py (1 hunks)
- new_tests/integration/__init__.py (1 hunks)
- new_tests/integration/test_basic_integration.py (1 hunks)
- new_tests/pytest.ini (1 hunks)
- new_tests/test_map_data.py (1 hunks)
- pyproject.toml (1 hunks)
- tests/PROFILING_README.md (1 hunks)
- tests/RAND_TO_HYPFER_COMPRESSION_RESULTS.md (1 hunks)
- tests/VALETUDO_MAP_PARSER_TYPES_USAGE_REPORT.md (1 hunks)
- tests/analyze_room12.py (1 hunks)
- tests/analyze_room_connections.py (1 hunks)
- tests/analyze_segment_walls.py (1 hunks)
- tests/benchmark_margins.py (1 hunks)
- tests/compare_payloads.py (1 hunks)
- tests/convert_rand_to_hypfer.py (1 hunks)
- tests/debug_binary.py (1 hunks)
- tests/debug_manage_rooms.py (1 hunks)
- tests/glossyhardtofindnarwhal.json (1 hunks)
- tests/profiling_requirements.txt (1 hunks)
- tests/rand_rooms_test.py (1 hunks)
- tests/rooms_test.py (1 hunks)
- tests/test_floor_data.py (1 hunks)
- tests/test_hypfer_profiling.py (1 hunks)
- tests/test_mvcrender.py (1 hunks)
- tests/test_parser.py (1 hunks)
- tests/test_parser_comparison.py (1 hunks)
- tests/test_rand_to_hypfer_compression.py (1 hunks)
- tests/test_room_store.py (1 hunks)
- tests/test_status_text_performance.py (1 hunks)
- tests/tests/comparison.txt (1 hunks)
- tests/tests/test_all_bins.py (1 hunks)
- tests/tests/test_robot_angles.py (1 hunks)
💤 Files with no reviewable changes (1)
- SCR/valetudo_map_parser/config/optimized_element_map.py
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-12-19T10:03:44.321Z
Learnt from: sca075
Repo: sca075/Python-package-valetudo-map-parser PR: 29
File: SCR/valetudo_map_parser/hypfer_draw.py:438-462
Timestamp: 2025-12-19T10:03:44.321Z
Learning: In SCR/valetudo_map_parser/hypfer_draw.py, the `_check_room_with_corners` method correctly sets `self.img_h.robot_in_room` before validation. This state mutation pattern during room iteration is intentional and works correctly with the coordinate system used by the Hypfer handler, similar to the Rand256 handler pattern.
Applied to files:
- tests/tests/comparison.txt
- backups/hypfer_handler_ori.py
- backups/rand25_handler_rooms.py
- new_tests/integration/test_basic_integration.py
- backups/refactored_old_code.py
- tests/rand_rooms_test.py
- backups/hypfer_rooms_handler.py
📚 Learning: 2025-12-19T06:48:40.379Z
Learnt from: sca075
Repo: sca075/Python-package-valetudo-map-parser PR: 29
File: SCR/valetudo_map_parser/rand256_handler.py:445-461
Timestamp: 2025-12-19T06:48:40.379Z
Learning: In SCR/valetudo_map_parser/rand256_handler.py, the bounding box comparison `self.robot_in_room["right"] <= int(robot_x) <= self.robot_in_room["left"]` is correct for the Rand256 coordinate system, which uses inverted or special axis conventions compared to standard Cartesian coordinates. The "left" value can be larger than "right" in this coordinate system.
Applied to files:
- SCR/valetudo_map_parser/rand256_handler.py
- backups/hypfer_rooms_handler.py
🧬 Code graph analysis (23)

tests/test_rand_to_hypfer_compression.py (1)
- tests/convert_rand_to_hypfer.py (1): main (273-314)

tests/convert_rand_to_hypfer.py (1)
- tests/test_rand_to_hypfer_compression.py (1): main (70-214)

tests/test_status_text_performance.py (2)
- SCR/valetudo_map_parser/config/status_text/status_text.py (2): StatusText (19-123), get_status_text (108-123)
- SCR/valetudo_map_parser/config/shared.py (1): vacuum_bat_charged (130-142)

tests/debug_binary.py (1)
- backups/test_old_pars.py (2): RRMapParser (24-412), parse_data (396-403)

SCR/valetudo_map_parser/__init__.py (2)
- tests/test_floor_data.py (2): FloorData (31-43), TrimsData (8-26)
- SCR/valetudo_map_parser/config/types.py (2): FloorData (347-363), TrimsData (289-343)

new_tests/conftest.py (3)
- tests/test_floor_data.py (1): CameraShared (47-52)
- SCR/valetudo_map_parser/config/shared.py (2): CameraSharedManager (250-358), get_instance (347-352)
- SCR/valetudo_map_parser/config/types.py (1): clear (336-343)

tests/test_room_store.py (2)
- tests/rooms_test.py (4): RoomStore (38-68), get_rooms (54-55), set_rooms (57-58), get_all_instances (67-68)
- SCR/valetudo_map_parser/config/types.py (1): room_names (156-177)

tests/rooms_test.py (3)
- backups/hypfer_handler_ori.py (1): async_extract_room_properties (72-95)
- tests/analyze_room12.py (1): main (20-114)
- tests/analyze_room_connections.py (1): main (23-198)

tests/test_hypfer_profiling.py (2)
- SCR/valetudo_map_parser/config/shared.py (2): CameraSharedManager (250-358), get_instance (347-352)
- backups/hypfer_handler_ori.py (1): HypferMapImageHandler (38-477)

new_tests/config/test_drawable.py (2)
- backups/drawable_ori.py (3): Drawable (27-906), create_empty_image (43-48), from_json_to_image (51-100)
- SCR/valetudo_map_parser/config/drawable_elements.py (1): DrawableElement (21-52)

new_tests/test_map_data.py (1)
- new_tests/conftest.py (1): hypfer_json_data (47-50)

tests/compare_payloads.py (1)
- backups/test_old_pars.py (2): RRMapParser (24-412), parse_data (396-403)

tests/test_mvcrender.py (2)
- backups/colors.py (1): sample_and_blend_color (459-521)
- tests/benchmark_margins.py (1): DummyHandler (11-36)

new_tests/integration/test_basic_integration.py (5)
- SCR/valetudo_map_parser/config/shared.py (1): get_instance (347-352)
- backups/hypfer_handler_ori.py (1): HypferMapImageHandler (38-477)
- backups/rand25_handler_rooms.py (1): ReImageHandler (44-492)
- tests/rooms_test.py (2): RoomStore (38-68), get_rooms (54-55)
- SCR/valetudo_map_parser/config/utils.py (1): async_get_image (90-134)

tests/analyze_room12.py (2)
- tests/analyze_room_connections.py (1): main (23-198)
- tests/rooms_test.py (1): main (325-338)

new_tests/config/test_status_text.py (1)
- SCR/valetudo_map_parser/config/status_text/status_text.py (2): StatusText (19-123), get_status_text (108-123)

tests/analyze_room_connections.py (2)
- tests/analyze_room12.py (1): main (20-114)
- tests/rooms_test.py (1): main (325-338)

new_tests/config/test_shared.py (1)
- SCR/valetudo_map_parser/config/shared.py (9): vacuum_bat_charged (130-142), _compose_obstacle_links (145-167), update_user_colors (169-171), update_rooms_colors (177-179), reset_trims (185-188), batch_update (190-193), batch_get (195-197), generate_attributes (199-227), get_instance (347-352)

backups/colors.py (1)
- SCR/valetudo_map_parser/config/shared.py (2): update_user_colors (169-171), update_rooms_colors (177-179)

tests/rand_rooms_test.py (3)
- SCR/valetudo_map_parser/config/drawable_elements.py (3): DrawableElement (21-52), DrawingConfig (55-290), is_enabled (171-173)
- backups/map_data_ori.py (5): RandImageData (202-499), get_rrm_segments_ids (492-499), get_rrm_image_size (422-430), get_rrm_image_position (433-437), async_get_rrm_segments (452-489)
- tests/analyze_segment_walls.py (1): load_test_data (21-33)

backups/hypfer_rooms_handler.py (4)
- SCR/valetudo_map_parser/config/drawable_elements.py (2): DrawableElement (21-52), DrawingConfig (55-290)
- tests/rooms_test.py (3): RoomStore (38-68), sublist (133-134), get_rooms (54-55)
- backups/map_data_ori.py (1): sublist (20-22)
- tests/rand_rooms_test.py (1): sublist (69-71)

tests/test_parser_comparison.py (1)
- backups/test_old_pars.py (2): RRMapParser (24-412), parse_data (396-403)

backups/map_data_ori.py (3)
- backups/hypfer_rooms_handler.py (1): sublist (43-44)
- tests/rand_rooms_test.py (1): sublist (69-71)
- tests/rooms_test.py (1): sublist (133-134)
🪛 LanguageTool
new_tests/README.md
[grammar] ~105-~105: Ensure spelling is correct
Context: ... - Test one thing per test method - Use parametrized tests for testing multiple inputs - Cle...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
new_tests/TEST_RESULTS_SUMMARY.md
[uncategorized] ~94-~94: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ... documentation ## Recommendations ### High Priority Fixes 1. Fix StatusText constructor sig...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
[grammar] ~116-~116: Use a hyphen to join words.
Context: ...ase coverage** with edge cases and error handling tests 4. Run with coverage ...
(QB_NEW_EN_HYPHEN)
tests/RAND_TO_HYPFER_COMPRESSION_RESULTS.md
[grammar] ~18-~18: Ensure spelling is correct
Context: ...60, 30361, 30362, ...]` ### Compressed Hypfer Format - Compressed values: 543 val...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
tests/PROFILING_README.md
[grammar] ~45-~45: Ensure spelling is correct
Context: ...h cd tests python test_rand.py ### Hypfer Vacuum Profilingbash cd tests pytho...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
new_tests/IMPLEMENTATION_SUMMARY.md
[grammar] ~117-~117: Ensure spelling is correct
Context: ...oint generation ## Test Data Used ### Hypfer Vacuum (JSON) - test.json - Main test...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
tests/VALETUDO_MAP_PARSER_TYPES_USAGE_REPORT.md
[style] ~325-~325: To make your text as clear as possible to all readers, do not use this foreign term. Possible alternatives are “not publicly”, “privately”, “secretly”, “confidentially”, and “behind closed doors”. If you mean the English word ‘camera’, a pronoun or a determiner is probably missing before the word.
Context: ...m Impact:** - ColorsManagement - Used in camera.py - RRMapParser - Used in decompress...
(IN_CAMERA)
🪛 markdownlint-cli2 (0.18.1)
new_tests/TEST_RESULTS_SUMMARY.md
46-46: Strong style
Expected: asterisk; Actual: underscore
(MD050, strong-style)
46-46: Strong style
Expected: asterisk; Actual: underscore
(MD050, strong-style)
tests/PROFILING_README.md
54-54: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
67-67: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
76-76: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
tests/VALETUDO_MAP_PARSER_TYPES_USAGE_REPORT.md
226-226: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
232-232: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
238-238: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
244-244: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
251-251: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
256-256: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
337-337: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
372-372: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
```json
"repositories": [
    "renovatebot/github-action",
    "renovate-tests/cocoapods1",
    "renovate-tests/gomod1"
```
Critical: Repository list contains incorrect repositories.
The repositories list contains Renovate test repositories and the Renovate GitHub Action repo, not this project's repository. This configuration will not work for managing dependencies in this project.
🔎 Proposed fix

```diff
   "repositories": [
-    "renovatebot/github-action",
-    "renovate-tests/cocoapods1",
-    "renovate-tests/gomod1"
+    "sca075/Python-package-valetudo-map-parser"
   ],
```
🤖 Prompt for AI Agents
.github/workflows/renovate.json around lines 9-12: the "repositories" array
contains unrelated Renovate test repos and the renovatebot action repo; replace
this with the correct repository identifier(s) for this project (e.g.,
"org/name" for this repo) or remove the "repositories" field entirely to let
Renovate target the current repository by default; update the JSON so it lists
only valid repository slugs for which you intend Renovate to run and validate
the file with a JSON linter after editing.
```python
# Avoid division by zero
if out_alpha < 0.0001:
    return Color[0, 0, 0, 0]  # Fully transparent result
```
Runtime error: square brackets index the type alias instead of building a tuple.

`Color[0, 0, 0, 0]` will raise a `TypeError` because `Color` is a type alias, not a constructor that accepts subscripted values. The return value should be a plain tuple.
🔎 Proposed fix

```diff
 # Avoid division by zero
 if out_alpha < 0.0001:
-    return Color[0, 0, 0, 0]  # Fully transparent result
+    return (0, 0, 0, 0)  # Fully transparent result
```

🤖 Prompt for AI Agents
In backups/colors.py around line 440, the code returns Color[0, 0, 0, 0] which
is a syntax/type error because Color is not subscriptable; replace that
expression with a proper tuple literal representing the color, e.g. return (0,
0, 0, 0), ensuring the returned value matches the Color type alias (adjust to
Color(...) only if Color is a callable constructor).
```python
# Use scipy.ndimage.map_coordinates for efficient sampling
# This is much faster than looping through coordinates
for channel in range(4):
    # Sample this color channel for all valid coordinates at once
```
Variable shadowing: `width` is used as both the image dimension and the loop variable.

The `width` from `height, width = image.shape[:2]` is shadowed by `for width in set(widths)`, which causes incorrect bounds checking inside the loop.
🔎 Proposed fix

```diff
 elif element_type == "line":
     # Extract line endpoints
     lines = []
     widths = []
     ...
     # Process lines with the same width together
-    for width in set(widths):
+    for line_width in set(widths):
         same_width_lines = [
-            lines[i] for i in range(len(lines)) if widths[i] == width
+            lines[i] for i in range(len(lines)) if widths[i] == line_width
         ]
         if same_width_lines:
             # Create a combined mask for all lines with this width
-            mask = np.zeros((height, width), dtype=bool)
+            mask = np.zeros((height, img_width), dtype=bool)
```

Note: you'll need to rename the outer `width` variable to something like `img_width` at line 729.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In backups/colors.py around lines 729 to 756, the outer variable width (from
height, width = image.shape[:2]) is shadowed by the loop for width in
set(widths), breaking bounds checks; rename the outer width to img_width (update
the assignment at ~line 729 and all subsequent uses in this function/block) and
rename the loop variable to a distinct name (e.g., line_width), and
ensure all bounds checking and indexing use img_width where referring to the
image dimension.
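The shadowing is easy to reproduce in isolation — after the loop starts, `width` no longer refers to the image dimension (plain lists here to keep the sketch dependency-free):

```python
image = [[0] * 200 for _ in range(100)]          # 100 rows, 200 columns
height, width = len(image), len(image[0])        # width == 200

widths = [1, 3]                                  # per-line thicknesses
shapes = []
for width in set(widths):                        # shadows the image width!
    mask = [[False] * width for _ in range(height)]
    shapes.append(len(mask[0]))                  # 1 or 3, never 200
```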
```python
@staticmethod
def overlay_robot(
    background_image: NumpyArray, robot_image: NumpyArray, x: int, y: int
) -> NumpyArray:
    """
    Overlay the robot image on the background image at the specified coordinates.
    """
    robot_height, robot_width, _ = robot_image.shape
    robot_center_x = robot_width // 2
    robot_center_y = robot_height // 2
    top_left_x = x - robot_center_x
    top_left_y = y - robot_center_y
    bottom_right_x = top_left_x + robot_width
    bottom_right_y = top_left_y + robot_height
    background_image[top_left_y:bottom_right_y, top_left_x:bottom_right_x] = (
        robot_image
    )
    return background_image
```
Missing bounds checking in `overlay_robot` can cause `IndexError`.

The function assigns to `background_image` slices without validating that the robot overlay fits within the background boundaries. If the robot is positioned near an edge, `top_left_x`/`top_left_y` can be negative or `bottom_right_x`/`bottom_right_y` can exceed the image dimensions.
🔎 Proposed fix with bounds checking

```diff
 @staticmethod
 def overlay_robot(
     background_image: NumpyArray, robot_image: NumpyArray, x: int, y: int
 ) -> NumpyArray:
     """
     Overlay the robot image on the background image at the specified coordinates.
     """
+    bg_height, bg_width = background_image.shape[:2]
     robot_height, robot_width, _ = robot_image.shape
     robot_center_x = robot_width // 2
     robot_center_y = robot_height // 2
     top_left_x = x - robot_center_x
     top_left_y = y - robot_center_y
     bottom_right_x = top_left_x + robot_width
     bottom_right_y = top_left_y + robot_height
+
+    # Check if overlay is completely outside the background
+    if top_left_x >= bg_width or top_left_y >= bg_height or bottom_right_x <= 0 or bottom_right_y <= 0:
+        return background_image
+
+    # Calculate clipped coordinates
+    src_x_start = max(0, -top_left_x)
+    src_y_start = max(0, -top_left_y)
+    src_x_end = robot_width - max(0, bottom_right_x - bg_width)
+    src_y_end = robot_height - max(0, bottom_right_y - bg_height)
+
+    dst_x_start = max(0, top_left_x)
+    dst_y_start = max(0, top_left_y)
+    dst_x_end = min(bg_width, bottom_right_x)
+    dst_y_end = min(bg_height, bottom_right_y)
+
-    background_image[top_left_y:bottom_right_y, top_left_x:bottom_right_x] = (
-        robot_image
+    background_image[dst_y_start:dst_y_end, dst_x_start:dst_x_end] = (
+        robot_image[src_y_start:src_y_end, src_x_start:src_x_end]
     )
     return background_image
```
🤖 Prompt for AI Agents
In backups/drawable_ori.py around lines 639 to 656, overlay_robot lacks bounds
checking and will IndexError if the robot extends outside the background; fix by
computing the overlap between background and robot: clamp top_left_x/y to >=0
and bottom_right_x/y to <=background width/height, compute corresponding source
rectangle offsets into robot_image (crop robot_image to the overlap), only write
that cropped region into the background slice, and handle fully-outside cases by
returning the original background unchanged; ensure integer indices and
consistent channel counts before assignment.
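The clipped-copy arithmetic from the proposed fix can be sanity-checked in miniature; this sketch uses plain nested lists instead of NumPy slices, but the index math is the same:

```python
def overlay_clipped(bg, fg, x, y):
    """Paste fg centered at (x, y) onto bg, clipping at the borders."""
    bg_h, bg_w = len(bg), len(bg[0])
    fg_h, fg_w = len(fg), len(fg[0])
    top, left = y - fg_h // 2, x - fg_w // 2
    # Destination rectangle, clamped to the background bounds
    dy0, dx0 = max(0, top), max(0, left)
    dy1, dx1 = min(bg_h, top + fg_h), min(bg_w, left + fg_w)
    if dy0 >= dy1 or dx0 >= dx1:
        return bg  # overlay lies completely outside
    for dy in range(dy0, dy1):
        for dx in range(dx0, dx1):
            bg[dy][dx] = fg[dy - top][dx - left]
    return bg

bg = [[0] * 10 for _ in range(10)]
fg = [[1] * 4 for _ in range(4)]
overlay_clipped(bg, fg, 0, 0)  # centered on the corner: no IndexError
```

Only the 2x2 overlap of the 4x4 robot lands on the background, which is exactly what the clipped NumPy assignment in the fix does.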
```python
for width in set(widths):
    same_width_lines = [
        lines[i] for i in range(len(lines)) if widths[i] == width
    ]
    if same_width_lines:
        # Create a combined mask for all lines with this width
        mask = np.zeros((height, width), dtype=bool)

        # Draw all lines into the mask
        for start, end in same_width_lines:
            x1, y1 = start
            x2, y2 = end

            # Skip invalid lines
            if not (
                0 <= x1 < width
                and 0 <= y1 < height
                and 0 <= x2 < width
                and 0 <= y2 < height
            ):
                continue

            # Use Bresenham's algorithm to get line points
            length = max(abs(x2 - x1), abs(y2 - y1))
            if length == 0:
                continue

            t = np.linspace(0, 1, length * 2)
            x_coords = np.round(x1 * (1 - t) + x2 * t).astype(int)
            y_coords = np.round(y1 * (1 - t) + y2 * t).astype(int)

            # Add line points to mask
            for x, y in zip(x_coords, y_coords):
                if width == 1:
                    mask[y, x] = True
                else:
                    # For thicker lines
                    half_width = width // 2
                    min_y = max(0, y - half_width)
                    max_y = min(height, y + half_width + 1)
                    min_x = max(0, x - half_width)
                    max_x = min(width, x + half_width + 1)

                    # Create a circular brush
                    y_indices, x_indices = np.ogrid[
                        min_y:max_y, min_x:max_x
                    ]
                    brush = (y_indices - y) ** 2 + (
                        x_indices - x
                    ) ** 2 <= half_width**2
                    mask[min_y:max_y, min_x:max_x] |= brush

        # Apply color to all lines at once
        image[mask] = color
```
Critical bug: the variable `width` shadows the image width, causing incorrect mask dimensions and bounds checks.

At line 756, the loop variable `width` shadows the image width from line 699. This causes:

- Line 762: `np.zeros((height, width), ...)` creates a mask with the wrong dimensions (using the line width instead of the image width)
- Lines 771-774 and 789-797: bounds checks compare coordinates against the line width instead of the image width

🔎 Proposed fix - rename loop variable
```diff
 # Process lines with the same width together
-for width in set(widths):
+for line_width in set(widths):
     same_width_lines = [
-        lines[i] for i in range(len(lines)) if widths[i] == width
+        lines[i] for i in range(len(lines)) if widths[i] == line_width
     ]
     if same_width_lines:
         # Create a combined mask for all lines with this width
-        mask = np.zeros((height, width), dtype=bool)
+        mask = np.zeros((height, width), dtype=bool)
         # Draw all lines into the mask
         for start, end in same_width_lines:
             x1, y1 = start
             x2, y2 = end
             # Skip invalid lines
             if not (
                 0 <= x1 < width
                 and 0 <= y1 < height
                 and 0 <= x2 < width
                 and 0 <= y2 < height
             ):
                 continue
             # Use Bresenham's algorithm to get line points
             length = max(abs(x2 - x1), abs(y2 - y1))
             if length == 0:
                 continue
             t = np.linspace(0, 1, length * 2)
             x_coords = np.round(x1 * (1 - t) + x2 * t).astype(int)
             y_coords = np.round(y1 * (1 - t) + y2 * t).astype(int)
             # Add line points to mask
             for x, y in zip(x_coords, y_coords):
-                if width == 1:
+                if line_width == 1:
                     mask[y, x] = True
                 else:
                     # For thicker lines
-                    half_width = width // 2
+                    half_width = line_width // 2
                     min_y = max(0, y - half_width)
                     max_y = min(height, y + half_width + 1)
                     min_x = max(0, x - half_width)
-                    max_x = min(width, x + half_width + 1)
+                    max_x = min(width, x + half_width + 1)
                     # Create a circular brush
                     y_indices, x_indices = np.ogrid[
                         min_y:max_y, min_x:max_x
                     ]
                     brush = (y_indices - y) ** 2 + (
                         x_indices - x
                     ) ** 2 <= half_width**2
                     mask[min_y:max_y, min_x:max_x] |= brush
```
In backups/drawable_ori.py around lines 756-809, the loop variable named "width"
shadows the image width defined at line 699 causing mask size and bounds checks
to be wrong; rename the loop variable to something like "line_width" and update
all uses inside the loop accordingly (use the original image width variable from
line 699 when creating mask and doing bounds checks, e.g. np.zeros((height,
image_width), dtype=bool) and compare x/y against image_width), and replace
thickness-specific uses (comparisons, half_width calculation, and brush
generation) to reference "line_width" so mask dimensions and bound checks use
the actual image width while thickness logic uses the line width.
```python
# Simulate crop_area from AutoCrop: [left, top, right, bottom]
crop_area = [790, 490, 3209, 2509]

trims = TrimsData.from_list(crop_area, floor="Ground Floor")

print(f"Input crop_area: {crop_area}")
print(f"Created TrimsData: {trims}")
print(f"  floor: {trims.floor}")
print(f"  trim_up: {trims.trim_up}")
print(f"  trim_left: {trims.trim_left}")
print(f"  trim_down: {trims.trim_down}")
print(f"  trim_right: {trims.trim_right}")

assert trims.trim_up == 790
assert trims.trim_left == 490
assert trims.trim_down == 3209
assert trims.trim_right == 2509
assert trims.floor == "Ground Floor"
```
🧩 Analysis chain

🏁 Script executed:

```bash
# Find config/types.py and read the TrimsData.from_list implementation
find . -name "types.py" -path "*/config/*" | head -5
```

Repository: sca075/Python-package-valetudo-map-parser
Length of output: 126

🏁 Script executed:

```bash
# Read the types.py file to find TrimsData.from_list implementation
rg -n -A 10 "def from_list" ./SCR/valetudo_map_parser/config/types.py
```

Repository: sca075/Python-package-valetudo-map-parser
Length of output: 807

🏁 Script executed:

```bash
# Check if there are any other usages or tests of TrimsData.from_list to understand intended behavior
rg -n "TrimsData.from_list\|from_list.*crop" --type py -B 2 -A 2
```

Repository: sca075/Python-package-valetudo-map-parser
Length of output: 67

🏁 Script executed:

```bash
# Search for TrimsData usage and from_list calls more broadly
rg "TrimsData" --type py -B 1 -A 1 | head -50
```

Repository: sca075/Python-package-valetudo-map-parser
Length of output: 3307

🏁 Script executed:

```bash
# Read the actual test file that contains the review comment (test_floor_data.py lines 59-76)
cat -n ./tests/test_floor_data.py | sed -n '55,80p'
```

Repository: sca075/Python-package-valetudo-map-parser
Length of output: 1110
Semantic mismatch in `TrimsData.from_list` mapping.

The comment states `crop_area` is `[left, top, right, bottom]`, but `TrimsData.from_list` (config/types.py, lines 324-334) maps indices as:

- `crop_area[0]` → `trim_up` (should be `trim_left`)
- `crop_area[1]` → `trim_left` (should be `trim_up`)
- `crop_area[2]` → `trim_down` (should be `trim_right`)
- `crop_area[3]` → `trim_right` (should be `trim_down`)

This contradicts the documented format. `TrimCropData.from_list` (lines 96-103) maps the same indices correctly: `[0 → trim_left, 1 → trim_up, 2 → trim_right, 3 → trim_down]`. Correct the mapping in `TrimsData.from_list` and update the test assertions accordingly.
🤖 Prompt for AI Agents
In config/types.py around lines 324-334, TrimsData.from_list currently maps
crop_area indices incorrectly (0→trim_up,1→trim_left,2→trim_down,3→trim_right);
change the mapping to match the documented/expected format [left, top, right,
bottom] so that crop_area[0] → trim_left, crop_area[1] → trim_up, crop_area[2] →
trim_right, crop_area[3] → trim_down; then update tests/test_floor_data.py
around lines 59-76 to assert trims.trim_left == 790, trims.trim_up == 490,
trims.trim_right == 3209, trims.trim_down == 2509 (and keep floor assertion) so
the test matches the corrected mapping.
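The corrected mapping can be expressed with a minimal stand-in dataclass (hypothetical — the real `TrimsData` lives in `config/types.py` and carries more fields):

```python
from dataclasses import dataclass

@dataclass
class Trims:
    trim_left: int = 0
    trim_up: int = 0
    trim_right: int = 0
    trim_down: int = 0

    @classmethod
    def from_crop_area(cls, crop_area):
        # crop_area is documented as [left, top, right, bottom]
        left, top, right, bottom = crop_area
        return cls(trim_left=left, trim_up=top,
                   trim_right=right, trim_down=bottom)

trims = Trims.from_crop_area([790, 490, 3209, 2509])
```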
```python
# Generate image
self.image = await handler.get_image_from_json(
    self.test_data, return_webp=False
)
```
Method name mismatch will cause a runtime error.

The call uses `handler.get_image_from_json()`, but `HypferMapImageHandler` exposes `async_get_image_from_json()` (per the relevant snippet from backups/hypfer_handler_ori.py). This will raise `AttributeError`.
```diff
 # Generate image
-self.image = await handler.get_image_from_json(
+self.image = await handler.async_get_image_from_json(
     self.test_data, return_webp=False
 )
```
🤖 Prompt for AI Agents
In tests/test_hypfer_profiling.py around lines 222 to 225, the test calls
handler.get_image_from_json but the HypferMapImageHandler implementation exposes
async_get_image_from_json, causing an AttributeError at runtime; fix by calling
the correct async method name (await
handler.async_get_image_from_json(self.test_data, return_webp=False)) or add a
synchronous wrapper/alias named get_image_from_json on the handler that
delegates to async_get_image_from_json so the test uses an available method.
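Until the names are aligned, a test can guard against the mismatch by resolving whichever spelling exists (a sketch; `Handler` here is a stand-in for `HypferMapImageHandler`):

```python
import asyncio

class Handler:
    # Stand-in exposing the async-prefixed name, as the real handler does
    async def async_get_image_from_json(self, data, return_webp=False):
        return ("image", return_webp)

async def render(handler, data):
    method = getattr(handler, "async_get_image_from_json", None)
    if method is None:
        method = handler.get_image_from_json  # legacy spelling fallback
    return await method(data, return_webp=False)

result = asyncio.run(render(Handler(), {}))
```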
```python
# Measure function list size
import sys
func_list_size = sys.getsizeof(status_text.compose_functions)
func_ref_size = sum(sys.getsizeof(f) for f in status_text.compose_functions)

print(f"Function list size: {func_list_size} bytes")
print(f"Function references: {func_ref_size} bytes")
print(f"Total overhead: {func_list_size + func_ref_size} bytes (~{(func_list_size + func_ref_size)/1024:.2f} KB)")
print(f"Number of functions: {len(status_text.compose_functions)}")
```
AttributeError: compose_functions should be _compose_functions.
The StatusText class defines _compose_functions (with leading underscore) as shown in the relevant code snippets. This will cause an AttributeError at runtime.
🔎 Proposed fix
```diff
     # Measure function list size
     import sys
-    func_list_size = sys.getsizeof(status_text.compose_functions)
-    func_ref_size = sum(sys.getsizeof(f) for f in status_text.compose_functions)
+    func_list_size = sys.getsizeof(status_text._compose_functions)
+    func_ref_size = sum(sys.getsizeof(f) for f in status_text._compose_functions)
     print(f"Function list size: {func_list_size} bytes")
     print(f"Function references: {func_ref_size} bytes")
     print(f"Total overhead: {func_list_size + func_ref_size} bytes (~{(func_list_size + func_ref_size)/1024:.2f} KB)")
-    print(f"Number of functions: {len(status_text.compose_functions)}")
+    print(f"Number of functions: {len(status_text._compose_functions)}")
```

📝 Committable suggestion
```python
# Measure function list size
import sys
func_list_size = sys.getsizeof(status_text._compose_functions)
func_ref_size = sum(sys.getsizeof(f) for f in status_text._compose_functions)
print(f"Function list size: {func_list_size} bytes")
print(f"Function references: {func_ref_size} bytes")
print(f"Total overhead: {func_list_size + func_ref_size} bytes (~{(func_list_size + func_ref_size)/1024:.2f} KB)")
print(f"Number of functions: {len(status_text._compose_functions)}")
```
🤖 Prompt for AI Agents
In tests/test_status_text_performance.py around lines 113 to 121, the test references status_text.compose_functions but the StatusText implementation exposes _compose_functions (leading underscore), causing an AttributeError; update the test to reference status_text._compose_functions (or use a public accessor if available) and adjust any related len()/iteration and size calculations to use that attribute so the test accesses the correct member.
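One way to avoid the underscore access in tests is a read-only property on the class. This is a minimal sketch with invented compose steps, not the real StatusText class:

```python
class StatusText:
    """Minimal sketch; the real class wires up its compose steps elsewhere."""

    def __init__(self):
        # Private pipeline, mirroring the _compose_functions attribute
        # flagged in the review.
        self._compose_functions = [self._upper, self._exclaim]

    @staticmethod
    def _upper(state):
        return [s.upper() for s in state]

    @staticmethod
    def _exclaim(state):
        return [s + "!" for s in state]

    @property
    def compose_functions(self):
        """Public read-only view so callers never touch the underscore name."""
        return tuple(self._compose_functions)


status = StatusText()
state = ["cleaning"]
for func in status.compose_functions:
    state = func(state)
print(state)  # → ['CLEANING!']
```

Returning a tuple also prevents the test from accidentally mutating the pipeline while measuring it.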
```python
start = time.perf_counter()
for _ in range(10000):
    status_text = [f"{shared.file_name}: cleaning"]
    for func in status_text_obj.compose_functions:
        status_text = func(status_text, lang_map)
end = time.perf_counter()
```
Function signature mismatch will cause TypeError.
The compose functions in StatusText take only current_state as a parameter (e.g., _current_room(self, current_state: list[str])), but this code passes lang_map as a second argument. This will raise a TypeError at runtime.
🔎 Proposed fix
```diff
     # Measure just the function loop (without translation)
     start = time.perf_counter()
     for _ in range(10000):
         status_text = [f"{shared.file_name}: cleaning"]
         for func in status_text_obj.compose_functions:
-            status_text = func(status_text, lang_map)
+            status_text = func(status_text)
     end = time.perf_counter()
```

📝 Committable suggestion
```python
start = time.perf_counter()
for _ in range(10000):
    status_text = [f"{shared.file_name}: cleaning"]
    for func in status_text_obj.compose_functions:
        status_text = func(status_text)
end = time.perf_counter()
```
🤖 Prompt for AI Agents
In tests/test_status_text_performance.py around lines 132 to 137, the loop calls compose functions with two arguments (status_text, lang_map) but the StatusText compose functions accept only a single parameter (current_state), causing a TypeError; fix by invoking each compose function with only the current state (e.g., status_text = func(status_text)) or, if language mapping is required, update the compose function signatures and all their implementations to accept (current_state, lang_map) consistently. Choose the former for the test to match existing signatures, and remove the extra lang_map argument from the calls.
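If some steps genuinely need extra context such as a language map, binding it with functools.partial keeps every pipeline entry at the single-argument signature the review asks for. The function names and lang_map contents below are illustrative, not taken from the library:

```python
from functools import partial


def translate(state, lang_map):
    # Context arrives via partial(); the pipeline still passes one argument.
    return [lang_map.get(s, s) for s in state]


def append_room(state):
    return state + ["room: kitchen"]


lang_map = {"cleaning": "reinigen"}

# Every entry accepts exactly one positional argument: the state list.
compose_functions = [partial(translate, lang_map=lang_map), append_room]

state = ["cleaning"]
for func in compose_functions:
    state = func(state)
print(state)  # → ['reinigen', 'room: kitchen']
```

This keeps the loop body identical for every step, so the benchmark measures the same call shape the production code uses.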
```python
# Add the SCR directory to Python path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "SCR")))
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Check the directory structure and sys.path issue
echo "=== Repository structure ==="
git ls-files | head -20
echo ""
echo "=== Looking for SCR directory ==="
find . -type d -name "SCR" 2>/dev/null || echo "No SCR directory found at top level"
echo ""
echo "=== Looking for test files ==="
find . -name "*test*.py" -o -name "*tests*" -type d | head -20
echo ""
echo "=== Contents of tests directory structure ==="
ls -la tests/ 2>/dev/null || echo "tests/ not found"
echo ""
if [ -d "tests/tests" ]; then
  echo "=== Contents of tests/tests directory ==="
  ls -la tests/tests/
  echo ""
  echo "=== Checking if tests/tests/SCR exists ==="
  if [ -d "tests/tests/SCR" ]; then
    echo "tests/tests/SCR EXISTS"
  else
    echo "tests/tests/SCR DOES NOT EXIST"
  fi
else
  echo "tests/tests/ directory does not exist"
fi
echo ""
echo "=== Check test_robot_angles.py location and content ==="
if [ -f "tests/tests/test_robot_angles.py" ]; then
  echo "File found at tests/tests/test_robot_angles.py"
  echo "First 15 lines:"
  head -15 tests/tests/test_robot_angles.py
fi
```

Repository: sca075/Python-package-valetudo-map-parser
Length of output: 6034
Fix the sys.path modification to navigate to the correct SCR directory.
The path `os.path.join(os.path.dirname(__file__), "SCR")` resolves to `tests/tests/SCR`, which does not exist. The `SCR` directory is located at the repository root. Change the path to navigate up two directory levels: `os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..", "SCR"))`, or use `"../../SCR"`.
🤖 Prompt for AI Agents
In tests/tests/test_robot_angles.py around lines 8 to 9, the sys.path insertion points to tests/tests/SCR, which doesn't exist; update the path to reference the repository-root SCR by moving up two directories before joining "SCR" (e.g., use os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..", "SCR")) or an equivalent "../../SCR" resolution) so the SCR directory is correctly added to sys.path.
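A pathlib version of the suggested fix, written as a helper so the number of levels up to the repository root is explicit. The `/repo` paths in the example are hypothetical:

```python
import sys
from pathlib import Path


def add_scr_to_path(test_file: str, levels_up: int = 2) -> str:
    """Prepend the repo-root SCR directory to sys.path.

    levels_up is the number of directories between the test file and the
    repository root (tests/tests/ -> 2).
    """
    scr_dir = Path(test_file).resolve().parents[levels_up] / "SCR"
    path_str = str(scr_dir)
    if path_str not in sys.path:
        sys.path.insert(0, path_str)
    return path_str


# With a hypothetical checkout at /repo:
added = add_scr_to_path("/repo/tests/tests/test_robot_angles.py")
print(added)  # → /repo/SCR
```

In the real test the call would be `add_scr_to_path(__file__)`, which keeps the path correct regardless of the working directory pytest is launched from.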
Exposes `FloorData` and `TrimsData` types in the library's public API to enable the MQTT Vacuum Camera integration to import these types for multi-floor support implementation.

Changes
- Add `FloorData` and `TrimsData` to imports from `config.types`
- Add both types to the `__all__` list for the public API

Purpose
These types are needed by the MQTT Vacuum Camera Home Assistant integration to implement multi-floor support with persistent auto-crop trims per floor.
Summary by CodeRabbit
New Features
Bug Fixes & Improvements
Documentation
Chores